We thank the reviewers for their careful reading of our work and for their helpful comments

Neural Information Processing Systems

We thank the reviewers for their careful reading of our work and for their helpful comments. We will also clarify the text in Sections 2.1 and 2.2. In terms of experimental predictions, our work predicts the synaptic weights in the SFA circuit. One mechanism for implementing a quadratic expansion is so-called "Sigma-Pi units" (Rumelhart and Hinton; Mel and Koch, 1990). In this case, the derivation proceeds exactly as laid out in the paper. Thank you for pointing out the typos.
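The quadratic expansion mentioned above can be sketched in a few lines of numpy. This is a generic illustration, not the paper's circuit: multiplicative "Sigma-Pi"-style units compute products of their inputs, so a layer of them followed by a linear readout operates on a quadratic feature space.

```python
import numpy as np

def quadratic_expansion(x):
    """Append all pairwise products x_i * x_j (i <= j) to the input.

    A toy stand-in for what multiplicative Sigma-Pi-style units could
    compute; a linear readout of these features is a quadratic function
    of the original input.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    pairs = [x[i] * x[j] for i in range(n) for j in range(i, n)]
    return np.concatenate([x, np.array(pairs)])

# A 3-dimensional input yields 3 linear + 6 quadratic features.
features = quadratic_expansion([1.0, 2.0, 3.0])
```

For an n-dimensional input this produces n + n(n+1)/2 features, which is why the derivation in the paper can proceed unchanged on the expanded representation.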


We thank the reviewers for their careful reading of the manuscript and their constructive suggestions


We thank the reviewers for their careful reading of the manuscript and their constructive suggestions. Unlike secure Multi-Party Computation (MPC), HE supports non-interactive operations and greatly reduces the communication cost. One of our baselines, CHET [9], claims better performance than prior works such as E2DM. We will add E2DM to the related work and compare it against our Falcon in the revised version. Falcon uses a non-interactive HE setting.


We thank the reviewers for their careful reading of the manuscript and their constructive suggestions


We thank the reviewers for their careful reading of the manuscript and their constructive suggestions. Chimera supports switching between BFV and TFHE, while Glyph enables switching between BGV and TFHE. Some users may not have such a large network bandwidth. In contrast, Glyph first trains a CNN model on a public plaintext dataset. Apart from sending the encrypted input data, the client is not involved in Glyph's training.



We thank all of the reviewers for their time, careful reading, and valuable feedback


We thank all of the reviewers for their time, careful reading, and valuable feedback. Indeed, we verify that the assumption holds for the datasets used in the experiments (see Section 1.3). The distinction between norms is another good point; the notation in question means computing ‖E‖. In Figure 2, the quantities from Eq. (11) are hypothesized. We understand that spectral methods are not used in some engineering settings.
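Since the fragment above turns on the distinction between matrix norms, a generic numpy illustration (not the paper's specific E) shows why the choice matters: for the identity matrix the spectral norm is 1 while the Frobenius norm grows as the square root of the dimension, so bounds stated in one norm do not transfer to the other for free.

```python
import numpy as np

# Generic example: the spectral and Frobenius norms of the same
# matrix can differ by a factor of sqrt(n).
E = np.eye(9)
spec = np.linalg.norm(E, 2)       # largest singular value -> 1.0
frob = np.linalg.norm(E, "fro")   # sqrt of sum of squares -> 3.0
```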


We are very grateful to the three reviewers for their careful reading of our paper and for their insightful comments


Our responses are below, with the reviewers' comments in quotes. "The paper needs more details on the experiments..." "Give a clearer motivation for how embeddings were constrained in 4.1... the authors do not show why this..." We make no claim that this particular solution subset is optimal (or even at all good). "Fig 2 (a,b), the red lines do not match when..." "Fig 3, results for upper triangular... have better performance than diagonal ones... [Intuition?]..." We address the "meta question" of why embeddings work; to our knowledge we are the first to do this. "Previous methodologies... implicitly address [non-identifiability] already, as they place implicit constraints (like..." But U and V are non-square, so they cannot be symmetric.
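The non-identifiability point can be sketched in numpy. With non-square factors U and V, any invertible matrix A produces a different factor pair that reproduces the same product exactly, which is why extra constraints (such as symmetry, which non-square factors cannot satisfy) come up at all. The matrix sizes below are arbitrary toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

# A rank-k factorization M = U V^T with non-square U (n x k) and V (m x k).
n, m, k = 6, 5, 3
U = rng.standard_normal((n, k))
V = rng.standard_normal((m, k))
M = U @ V.T

# Any invertible k x k matrix A gives a different factor pair
# (U A, V A^{-T}) with exactly the same product: the individual
# factors are non-identifiable without further constraints.
A = rng.standard_normal((k, k))
U2 = U @ A
V2 = V @ np.linalg.inv(A).T
M2 = U2 @ V2.T
```

Here M2 equals M to numerical precision even though U2 differs from U, so any method that reports the factors themselves is implicitly choosing one member of this equivalence class.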


We thank all the Reviewers for a careful reading of our paper and for providing useful suggestions for improvements, which we will be happy to implement in the camera-ready version


As we state at the beginning of Sec. 2, Theorems 1 and 3 hold for any activation function; we will clarify this point in the camera-ready version. Figure 1 (histogram of correctly and incorrectly classified pictures) shows that trained neural networks are far more likely to misclassify points closer to a classification boundary, for both the training and test sets; results are aggregated across 20 different trained neural networks. We will move the MNIST results to the main paper, swapping them with the detailed proofs, and modify the text accordingly. We will add in the camera-ready version a discussion on the convergence rate to the Gaussian probability distribution.
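The Figure 1 observation can be illustrated with a toy stand-in: a fixed linear boundary with label noise rather than the paper's trained networks. All names and sizes below are made up for the sketch; the point is only that misclassified points concentrate at small distance to the decision boundary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: linear decision boundary w.x + b = 0, noisy binary labels.
w, b = np.array([1.0, -1.0]), 0.0
X = rng.standard_normal((2000, 2))
labels = (X @ w + b + 0.5 * rng.standard_normal(2000) > 0)
preds = (X @ w + b > 0)

# Euclidean distance of each point to the boundary.
dist = np.abs(X @ w + b) / np.linalg.norm(w)
correct = dist[preds == labels]
incorrect = dist[preds != labels]

# Histogram the two groups: errors pile up near distance zero.
hist_c, edges = np.histogram(correct, bins=20, range=(0, 3))
hist_i, _ = np.histogram(incorrect, bins=20, range=(0, 3))
```

Errors only occur where the label noise can flip the sign of the margin, so the misclassified histogram is concentrated near the boundary, mirroring the qualitative shape reported for the trained networks.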

